3D hypothesis clustering for cross-view matching in multi-person motion capture
Authors
Abstract
Similar resources
Multi-view Video Capture of Garment Motion
We present an image-based algorithm for surface reconstruction of moving garments from multiple calibrated video cameras. Using a color-coded cloth texture, we reliably match circular features between different camera views. As the surface model, we use an a priori known triangle mesh. By identifying the mesh vertices with texture elements, we obtain a coherent parametrization of the surface over tim...
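The cross-view matching idea in this abstract can be sketched minimally: if each circular feature carries a unique color code, matching reduces to a dictionary intersection over the codes seen in both views. The interface below (codes as strings, positions as 2D tuples) is an illustrative assumption, not the paper's actual data structures.

```python
def match_by_color_code(view1, view2):
    """Match features across two views by their unique color code.

    view1, view2: dicts mapping color-code -> 2D image position.
    Returns a dict mapping each shared code to its (pos1, pos2) pair.
    """
    return {code: (view1[code], view2[code])
            for code in view1.keys() & view2.keys()}

m = match_by_color_code({"RGB": (10, 20), "GBR": (30, 40)},
                        {"RGB": (12, 21), "BRG": (5, 5)})
# Only the code present in both views ("RGB") yields a match.
```

Because each code is unique on the cloth, ambiguity in the correspondence search is avoided entirely; this is the design advantage of a coded texture over plain feature matching.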
Markerless Motion Capture with Multi-view Structured Light
We present a multi-view structured light system for markerless motion capture of human subjects. In contrast to existing approaches that use multiple camera streams, we reconstruct the scene by combining six partial 3D scans generated from three structured light stations surrounding the subject and operating in a round robin fashion. We avoid interference between multiple projectors through tim...
Activity-based methods for person recognition in motion capture sequences
In this paper we present two algorithms for efficient person recognition operating on motion capture data depicting persons performing various everyday activities. The first approach is driven by the assumption that, if two motion sequences depict a certain activity performed by the same person, then consecutive frames (poses) of one sequence are expected to be similar to consecutive fram...
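The intuition behind the first approach can be illustrated with a minimal sequence-similarity score: compare corresponding poses of two equal-length sequences with a Euclidean distance and average the result. This is a hedged sketch of the underlying idea, not the paper's actual method; poses here are flat coordinate lists, an assumed representation.

```python
import math

def pose_distance(p, q):
    """Euclidean distance between two poses (flat coordinate lists)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def sequence_similarity(seq_a, seq_b):
    """Mean pose-to-pose distance between two equal-length sequences.

    Lower values indicate more similar motion, matching the intuition that
    the same person performing the same activity yields similar poses.
    """
    return sum(pose_distance(p, q)
               for p, q in zip(seq_a, seq_b)) / len(seq_a)

identical = sequence_similarity([[0.0, 0.0], [1.0, 1.0]],
                                [[0.0, 0.0], [1.0, 1.0]])
# Identical sequences score 0.0, the best possible match.
```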
Multi-Channel Pyramid Person Matching Network for Person Re-Identification
In this work, we present a Multi-Channel deep convolutional Pyramid Person Matching Network (MC-PPMN) based on the combination of the semantic-components and the color-texture distributions to address the problem of person re-identification. In particular, we learn separate deep representations for semantic-components and color-texture distributions from two person images and then employ pyramid ...
Unlabelled 3D Motion Examples Improve Cross-View Action Recognition
Figure 1: (a) We exploit the visual similarity between mocap-generated trajectories (left) and dense trajectories (right) to improve cross-view action recognition. (b) For mocap-trajectories, we can easily obtain corresponding features (i.e., descriptors for trajectories that originate from the same 3D point) in two views. We use these pairs of features to learn the transformation funct...
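Learning a transformation from paired features can be sketched in its simplest form: given feature pairs (x, y) from the two views, fit a scalar map y ≈ w·x by closed-form least squares. The paper's actual transformation function is richer; this scalar reduction is purely an illustrative assumption.

```python
def fit_scalar_map(xs, ys):
    """Least-squares scalar map w minimizing sum((w*x - y)^2).

    Closed form: w = sum(x*y) / sum(x*x). Given paired features from
    two views, w maps features of one view toward the other.
    """
    num = sum(x * y for x, y in zip(xs, ys))
    den = sum(x * x for x in xs)
    return num / den

w = fit_scalar_map([1.0, 2.0, 3.0], [2.0, 4.0, 6.0])
# Perfectly paired features related by a factor of 2 recover w = 2.0.
```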
Journal
Journal title: Computational Visual Media
Year: 2020
ISSN: 2096-0433, 2096-0662
DOI: 10.1007/s41095-020-0171-y